Challenges and Innovations in Pharmaceutical Products Development

Author

  • Frank Rockhold
Abstract

At FDA, as in industry, we promote powerful tests with the type I error rate controlled. This leads to various tests using different standard errors or confidence intervals. It is often ignored that the confidence interval should be consistent with the test. This also happens often in equivalence testing, when a 90% confidence interval is used to decide whether two treatments are equivalent. I examined cases with normal and binary data; we will extend the work to cover Poisson and survival data.
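
To make the consistency issue concrete, the following minimal sketch (illustrative only; the data and margin are hypothetical) shows the standard two one-sided tests (TOST) procedure at one-sided alpha = 0.05 for two normal means, together with the 90% confidence interval that matches it; a 95% interval would not correspond to the same test.

```python
# Illustrative sketch (hypothetical data and margin): two one-sided tests
# (TOST) for equivalence of two normal means, and the 90% confidence interval
# that is consistent with the pair of alpha = 0.05 one-sided tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
test = rng.normal(loc=0.2, scale=1.0, size=40)   # hypothetical test-arm data
ref = rng.normal(loc=0.0, scale=1.0, size=40)    # hypothetical reference-arm data
margin = 0.5                                     # hypothetical equivalence margin

n_t, n_r = len(test), len(ref)
diff = test.mean() - ref.mean()
sp2 = ((n_t - 1) * test.var(ddof=1) + (n_r - 1) * ref.var(ddof=1)) / (n_t + n_r - 2)
se = np.sqrt(sp2 * (1 / n_t + 1 / n_r))
df = n_t + n_r - 2

# Two one-sided tests, each at alpha = 0.05
p_lower = 1 - stats.t.cdf((diff + margin) / se, df)   # H0: diff <= -margin
p_upper = stats.t.cdf((diff - margin) / se, df)       # H0: diff >= +margin
equivalent = max(p_lower, p_upper) < 0.05

# The interval consistent with this test is the 90% (= 1 - 2*alpha) CI.
ci = diff + np.array([-1.0, 1.0]) * stats.t.ppf(0.95, df) * se
print(f"diff = {diff:.3f}, 90% CI = ({ci[0]:.3f}, {ci[1]:.3f}), "
      f"equivalence declared: {equivalent}")
```

Equivalence is declared exactly when the 90% confidence interval lies entirely inside (-margin, +margin), which is the consistency between test and interval discussed above.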

Yi Tsong received his Ph.D. in Statistics from the University of North Carolina at Chapel Hill in 1979. He did his post-doctoral training in cardiovascular prevention and biostatistics at Northwestern Medical School (1978-1980). He worked as a senior statistician in pattern recognition at Lockheed Engineering and Management Company (1981-1983) and as a biostatistical consultant at the University of Texas Medical Branch at Galveston (1984-1987) before joining FDA. He served as team leader of postmarketing risk assessment and as statistical reviewer of NDA submissions for critical care and pain relief products. He is currently the Division Director and Acting Team Leader of the statistical team for Chemistry and Manufacturing Control. He specializes in postmarketing risk assessment, drug manufacturing process control and quality assurance, active control noninferiority/equivalence tests, adaptive designs, and QTc trials. He has received 8 CDER and 12 FDA level awards for contributions in postmarketing drug risk assessment, advisory work on CDER postmarketing risk assessment external contracts, medication errors, quality control evaluation, drug compliance, in vitro bioequivalence, drug abuse potential studies, setting quotas of scheduled substances, adaptive design, and non-inferiority tests, among others. He publishes frequently in numerous professional journals. He served as Treasurer, Board Director, and President of the International Chinese Statistical Association. He also serves as an Associate Editor of Statistics in Medicine and the Journal of Biopharmaceutical Statistics.

Parallel Session Abstracts

Parallel Session 1: Bioequivalence and Biosimilars
Organizers: Victoria Chang (Boehringer-Ingelheim) and Yi Tsong (FDA); Chair: Yi Tsong (FDA)

Victoria Chang (Boehringer-Ingelheim) “Sample Size Determination for a Three-Arm Equivalence Trial of Poisson and Negative Binomial Responses”

Assessing equivalence or similarity has drawn much attention recently as the US pharmaceutical industry faces the biologics patent cliff. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well-established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. In this paper, we derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints, as an extension of prior research on continuous endpoints. In addition, we examine the effect of the dispersion parameter on the power and the sample size by varying its value from small to large. In extensive numerical studies, we demonstrate that the required sample size depends heavily on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data may easily lose up to 20% power, depending on the value of the dispersion parameter.
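
The dependence on the dispersion parameter can be illustrated with a simple simulation. The sketch below uses hypothetical rates, sample size, and margin, and omits the superiority-over-placebo comparison of the three-arm design; it estimates the power of a test-versus-reference equivalence comparison on the log rate ratio when counts are negative binomial. As the dispersion parameter grows, the distribution approaches the Poisson and the power increases.

```python
# Simulation sketch (hypothetical parameters): power of a test-vs-reference
# equivalence comparison on the log rate ratio when counts follow a negative
# binomial distribution with mean m and dispersion k (variance = m + m^2/k).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2025)

def nb_sample(mean, k, size):
    # numpy's negative_binomial(n, p) has mean n*(1-p)/p; set n=k, p=k/(k+mean)
    return rng.negative_binomial(k, k / (k + mean), size=size)

def equivalence_power(n, mean_t, mean_r, k, margin=np.log(1.25),
                      alpha=0.05, n_sim=2000):
    z = stats.norm.ppf(1 - alpha)
    hits = 0
    for _ in range(n_sim):
        x_t, x_r = nb_sample(mean_t, k, n), nb_sample(mean_r, k, n)
        log_ratio = np.log(x_t.mean()) - np.log(x_r.mean())
        # Delta-method standard error of log(sample mean), per arm
        se = np.sqrt(x_t.var(ddof=1) / (n * x_t.mean() ** 2) +
                     x_r.var(ddof=1) / (n * x_r.mean() ** 2))
        hits += (log_ratio - z * se > -margin) and (log_ratio + z * se < margin)
    return hits / n_sim

for k in (0.5, 2.0, 10.0, 1e6):   # 1e6 is essentially the Poisson limit
    print(f"dispersion k = {k:g}: power ~ {equivalence_power(200, 5.0, 5.0, k):.2f}")
```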

Meiyu Shen (FDA) “Distributional assumptions for AUC, Cmax and Tmax”

In a typical pharmacokinetic bioequivalence study with a single dose administration, one of the drug products is a reference formulation and the other a test formulation. Each subject is administered both formulations in a randomized two-period crossover design. A concentration-time profile is determined for each subject for each formulation. Each single concentration-time profile can be modeled by a pharmacokinetic compartmental model. Many software programs exist for estimating the pharmacokinetic parameters such as the absorption rate, the volume of distribution, etc. AUC, Cmax, and Tmax can then be obtained from the fitted pharmacokinetic model. In spite of these elaborate pharmacokinetic models, the AUC, Cmax, and Tmax used for bioequivalence assessment are obtained by a nonparametric method. In practice, univariate response variables such as log(AUC) and log(Cmax) are often assumed to follow a normal distribution without much experimental data support. For instance, an investigation of observed pharmacokinetic studies was based on 29 to 69 subjects, so the power of the Shapiro-Wilk test to detect departures from either distribution (lognormal or normal) may have been limited. In this presentation, we investigate the normality assumption of log(AUC) or log(Cmax) using the pharmacokinetic compartmental models typically used to describe concentration profiles over time. In particular, if data are generated using the simplest pharmacokinetic models (namely one- and two-compartment models), will this ultimately lead to deciding which distribution of log(AUC), log(Cmax), or log(Tmax) is most plausible?
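
As one way to picture the kind of investigation described above, the sketch below simulates subjects from a one-compartment model with first-order absorption and log-normal between-subject variability in the PK parameters (all parameter values are assumed for illustration), computes AUC, Tmax, and Cmax from the closed-form expressions, and applies the Shapiro-Wilk test to log(AUC) and log(Cmax).

```python
# Sketch (assumed parameter values): one-compartment model with first-order
# absorption and log-normal between-subject variability; closed-form AUC, Tmax,
# and Cmax per subject; Shapiro-Wilk test applied to log(AUC) and log(Cmax).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_subjects, dose = 500, 100.0    # hypothetical study size and dose

ka = np.exp(rng.normal(np.log(1.0), 0.3, n_subjects))    # absorption rate (1/h)
ke = np.exp(rng.normal(np.log(0.2), 0.3, n_subjects))    # elimination rate (1/h)
vd = np.exp(rng.normal(np.log(30.0), 0.2, n_subjects))   # volume of distribution (L)

# C(t) = dose*ka / (vd*(ka-ke)) * (exp(-ke*t) - exp(-ka*t)), bioavailability F = 1 assumed
auc = dose / (vd * ke)                       # closed form for AUC(0-inf)
tmax = np.log(ka / ke) / (ka - ke)           # closed form for Tmax
cmax = dose * ka / (vd * (ka - ke)) * (np.exp(-ke * tmax) - np.exp(-ka * tmax))

for name, x in (("log(AUC)", np.log(auc)), ("log(Cmax)", np.log(cmax))):
    w, p = stats.shapiro(x)
    print(f"{name}: Shapiro-Wilk W = {w:.3f}, p = {p:.3g}")
```

Under these particular assumptions log(AUC) is exactly normal by construction (it is a sum of normal terms), while log(Cmax) is not, which is the kind of contrast the presentation examines.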

Jean Pan (Amgen) “Statistical Considerations in Biosimilar Clinical Development”

Clinical development for a biosimilar product is aimed at demonstrating similarity to a reference biologic product. It is not intended to prove clinical safety and efficacy all over again. With this understanding, there are specific challenges to the design and analysis of biosimilar clinical studies. In this talk we will discuss several statistical strategies and challenges for biosimilar clinical studies, including selection of endpoints, determination of margins, and evaluation of the totality-of-evidence. Experiences from working with regulatory agencies on clinical development of some biosimilar molecules will be shared from a statistical perspective.

Cassie (Xiaoyu) Dong (FDA) “Statistical Approaches to Demonstrate Analytical Similarity of Quality Attributes”

In conventional equivalence testing, the equivalence margin is usually fixed, for example (80%, 125%) in PK studies. However, such a fixed margin may not be suitable for highly variable medicines or for testing quality attributes of biologics. Considering these practical issues, we propose to establish the equivalence margin as a constant times the variability of the reference product. This constant is obtained by achieving a given power with a pre-specified sample size and true mean difference. With this equivalence margin, the test statistics for the equivalence test on the mean values need to be carefully derived and examined. When the variability of the reference product is a known constant, we developed an exact t-statistic. When the reference variability is unknown, we need to account for the variability of the sample variance when conducting the hypothesis test. We developed approximate approaches, confidence interval approaches, and exact statistics, and investigated the type I error rate and power function of the proposed methods for each scenario.

Parallel Session 2: Bayesian Non-Inferiority Trials
Organizer: Guochen Song (Quintiles); Chair: Guochen Song (Quintiles)

“Controlling Frequentist Type I and Type II Error in Bayesian Non-inferiority Trials: a Case Study”

In phase III biosimilar studies, utilizing a Bayesian method and borrowing information from historical data for the control arm can effectively reduce the sample size. From the frequentist point of view, however, the type I error from such studies can be inflated if not properly controlled. This case study demonstrates how to control the type I and type II error together in a setting where the endpoint variable is binary and a conjugate beta prior is assumed.
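
A minimal sketch of this setting follows (the prior parameters, margin, and posterior-probability threshold are assumed for illustration, not taken from the case study): the posterior probability of non-inferiority is computed by Monte Carlo from the conjugate beta posteriors, and the frequentist type I error is then estimated by simulating trials at the non-inferiority boundary.

```python
# Sketch (assumed prior, margin, and threshold): posterior probability of
# non-inferiority for a binary endpoint with conjugate beta priors, and a
# simulation of the frequentist type I error at the non-inferiority boundary.
import numpy as np

rng = np.random.default_rng(11)

def prob_noninferior(x_t, n_t, x_c, n_c, margin,
                     prior_t=(1, 1), prior_c=(35, 15), n_draws=4000):
    # prior_c is an informative prior for the control arm, e.g. encoding
    # hypothetical historical data with 35 responders out of 50 subjects
    p_t = rng.beta(prior_t[0] + x_t, prior_t[1] + n_t - x_t, n_draws)
    p_c = rng.beta(prior_c[0] + x_c, prior_c[1] + n_c - x_c, n_draws)
    return np.mean(p_t > p_c - margin)      # P(p_t > p_c - margin | data)

def type1_error(n_t, n_c, p_c_true, margin, threshold=0.975, n_sim=2000):
    # Simulate under the null: the true test-arm rate sits exactly at the margin
    rejections = 0
    for _ in range(n_sim):
        x_t = rng.binomial(n_t, p_c_true - margin)
        x_c = rng.binomial(n_c, p_c_true)
        rejections += prob_noninferior(x_t, n_t, x_c, n_c, margin) > threshold
    return rejections / n_sim

print("Simulated type I error:",
      type1_error(n_t=150, n_c=150, p_c_true=0.70, margin=0.10))
```

One way to control the frequentist error rates in such a design is to tune the decision threshold or the weight placed on the historical prior until the simulated type I error meets the target level, while checking that the type II error remains acceptable.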

Fanni Natanegara (Eli Lilly) “Bayesian considerations for non-inferiority clinical trials with case examples”

The gold standard for evaluating treatment efficacy of a pharmaceutical product is a placebo-controlled study. However, when a placebo-controlled study is considered unethical or impractical to conduct, a viable alternative is a non-inferiority (NI) study in which an experimental treatment is compared to an active control treatment. The objective of such a study is to determine whether the experimental treatment is not inferior to the active control by a pre-specified NI margin. The availability of historical studies for designing and analyzing an NI study makes these types of studies conducive to the use of the Bayesian approach. In this presentation, we will highlight case examples of utilizing Bayesian methods in NI studies and provide recommendations.

Sujit Ghosh (NC State University and SAMSI) “Robust Bayesian Methods for Non-Inferiority Tests Based on Dichotomous Data”

In a non-inferiority trial, the experimental treatment is compared against an active control instead of placebo. The goal of such a study is often to show that the experimental treatment is non-inferior to the control by some pre-specified margin. The standard approach for these problems, which relies on asymptotic normality, usually requires a large sample size to achieve a desired power level. This talk presents robust Bayesian approaches based on the Bayes factor and posterior probability for testing non-inferiority in the context of two-sample dichotomous data. A novel aspect of the proposed Bayesian methods is that the cut-off values for Bayes factors and posterior probabilities are determined from the data in a way that approximately controls the overall errors. Results based on simulated data indicate that both of the proposed Bayesian approaches provide significant improvement in terms of statistical power as well as total error rate over the popularly used frequentist procedures. This in turn indicates that the sample size required to achieve a certain power level could be substantially lowered by using the proposed Bayesian approaches. [This is joint work with Muhtar Osman; a major part of the talk is based on two published papers: 1 and 2.]

Parallel Session 3: Data Monitoring Committees
Organizer: Michael Pencina (Duke); Chair: Frank Rockhold (GSK)

“Benefit to risk considerations and methods applied to the ongoing monitoring of clinical trials”

The overall goal of a clinical trial is to assess a primary objective and endpoint (usually a benefit) against the background of secondary endpoints, including patient safety. The objective of the IDMC is to integrate the efficacy and safety information in some fashion to make ongoing decisions about whether to continue the trial as is, alter the design, or discontinue prematurely based on the benefits and harms observed in the trial. Thus, in some fashion the IDMC is tasked with creating a “benefit to risk” picture for the trial patients and future patients. The science of benefit-risk assessment for quantitatively summarizing completed trials (one or many) has evolved over the past decade. The purpose of this talk is to explore how one might apply these techniques in a more structured and systematic way in an ongoing IDMC setting. Topics to be discussed include a review of basic IDMC and B-R practices and processes; examples of how to integrate data in an evolving manner as part of data monitoring; graphical and other methods usually used to display B-R data at the end of a trial, adapted to interim looks; and an example reworked in a B-R framework over the life of a trial using methods outlined in 4. Some thought questions will also be proposed around the impact, if any, of type I error adjustment on the data review and presentation, and the impact, if any, of regulatory guidelines on these recommendations. The intent of this talk is to start the discussion of combining the classical IDMC process with more recent advances in benefit-risk methodology.

Karim Calis (FDA) “Challenges and Opportunities in Data Monitoring and Trial Oversight”

Clinical trial oversight requires coordination and review by a number of groups and committees whose diverse focus includes safety, quality, ethics, adjudication, operations, and logistics. Although some of these evolving roles and responsibilities invariably overlap, independent data monitoring committees (IDMCs) hold a unique place in trial oversight. IDMCs periodically review the accumulating safety and efficacy data by treatment group and advise the sponsor on whether to continue, modify, or terminate a trial based on risk-benefit assessment. They also play a critical role in assessing the validity and integrity of the trial to enhance its potential to generate reliable findings. IDMCs typically oversee a single trial but occasionally review multiple related trials. Emerging functions of IDMCs include the monitoring of pragmatic clinical trials and perhaps even the entire portfolio of research related to an investigational product throughout its clinical development life cycle. IDMCs should be composed of qualified individuals with knowledge of ethical principles and expertise in biostatistics, research methodology, and relevant areas of science and clinical medicine. IDMC members must be independent of the sponsor and afforded adequate resources and flexibility to perform their duties. Roles and responsibilities of the IDMC, including contingency and communication plans, should be clearly delineated in a succinct, well-organized charter that empowers IDMC members. Although IDMCs have been established for decades, the transformation of the clinical trial landscape has created new opportunities as well as scientific and regulatory challenges in the oversight of clinical trials.

Bob Bigelow (Duke) “Interim data analysis: Distinguishing signal from noise”

The goal of many clinical trials is to reach a decision on comparative treatment effects based entirely on information from the trial. Pre-specification of endpoints, sample size, acceptable type I and II errors, and statistical analyses increases the chances that hypothesized treatment differences (or similarities) can be demonstrated in the presence of background noise. However, patient safety information from an ongoing trial is often not sufficient for an IDMC to make conclusive recommendations, and reliance on expert clinical judgment and familiarity with external data are necessary. In this presentation we will discuss challenges of interim safety assessment and consider methods to improve statistical rigor in IDMC analyses.

Susan Halabi (Duke) “Group Sequential Design: Uses and Abuses”

Group sequential design (GSD) is considered part of standard statistical practice and has been developed for interim monitoring (and potential termination) of clinical trials to minimize the role of subjective judgment. Most randomized clinical trials include strategies for terminating the trial early if a treatment arm is found to be either effective or harmful to patients. Although GSDs serve as an aid to monitoring throughout the trial, the decision to stop a trial early is complex. In this talk, the consequences of terminating a trial early will be discussed, with an emphasis on statistical issues related to the estimation of the treatment effect and the analysis and interpretation of the primary and secondary endpoints. Several examples of oncology trials that were stopped early for superiority will be considered.
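
Because several of the talks in this session revolve around interim looks and early stopping, a small simulation sketch may help fix ideas: it checks the overall one-sided type I error of an O'Brien-Fleming-type boundary with four equally spaced analyses. The boundary values are the commonly tabulated approximate ones; the example is illustrative and not drawn from any of the talks.

```python
# Simulation sketch (illustrative boundaries): overall one-sided type I error
# of an O'Brien-Fleming-type group sequential boundary with K = 4 equally
# spaced analyses, estimated by simulating the joint null distribution of the
# interim z-statistics (standardized Brownian motion at the information times).
import numpy as np

rng = np.random.default_rng(5)
K = 4
info = np.arange(1, K + 1) / K          # equally spaced information fractions
c = 2.024                               # approximate O'Brien-Fleming constant
bounds = c / np.sqrt(info)              # z-scale boundaries: ~4.05, 2.86, 2.34, 2.02

n_sim = 200_000
steps = rng.normal(size=(n_sim, K)) * np.sqrt(1.0 / K)   # Brownian increments
z = np.cumsum(steps, axis=1) / np.sqrt(info)             # interim z-statistics
crossed = (z > bounds).any(axis=1)
print(f"simulated overall type I error: {crossed.mean():.4f}  (target ~0.025)")
```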

Parallel Session 4: Randomized Concentration-Controlled Trials
Organizer: Russell Reeve (Quintiles); Chair: Seth Berry (Quintiles)

“Pharmacokinetic/pharmacodynamic modeling and simulation in the design and analysis of RCCTs”

Pharmacokinetic (PK) and pharmacodynamic (PD) modeling is needed to design and understand the properties of a randomized concentration-controlled trial (RCCT). We will cover the exposure-response causal chain principles underlying these designs, explore how PK variability in concentration-time profiles impacts dose-response analyses, and discuss how implementing RCCTs can help control for variability in PK, improving the signal while reducing noise in the PD properties of the biological system and consequently enhancing the trial design.

Russell Reeve (Quintiles) “Efficiency of randomized concentration-controlled trials relative to randomized dose-controlled trials, and application to personalized dosing trials”

The literature on randomized concentration-controlled trials (RCCTs) is surveyed, comparing this trial design to the more traditional randomized dose-controlled trial (RDCT). It is shown that RCCTs require smaller sample sizes than RDCTs for the same power, and that they provide more informative data on the exposure-response relationship. RCCTs are similar in spirit to personalized titration designs, and this relationship is explored; it is shown that personalized titration designs have similar power, even in the face of categorical responses, such as a rheumatoid arthritis trial using the binary ACR20 as the primary endpoint.

Michael Hale (Baxter) “Practical Reasons Your Randomized Concentration Controlled Trial Might Flop”

The randomized concentration-controlled trial (RCCT) is based on the idea that the clinical response to a dose of drug is mediated through exposure, so randomizing people to different exposure targets should reduce “experimental noise” compared with randomizing to different doses. Implementing an RCCT can be very challenging, however, and few have actually been performed. This talk will consider some of the practical hurdles that must be overcome, based on the speaker's experience in designing and implementing a well-known RCCT for mycophenolate mofetil in renal transplantation.

Parallel Session 5: Subgrouping Analysis
Organizer: Xuan Liu (Abbvie); Chair: Martin King (Abbvie)

“Identifying Subgroups in Product Labeling: Two Recent Case Studies”

We evaluate two recently approved new drugs for which subgroups were identified in product labeling for potentially different treatment. For each case, trial results and labeling decisions are reviewed in light of the EMA draft guideline on subgroups in confirmatory clinical trials. We discuss the relative contributions of various factors, including evidence of heterogeneity, biological plausibility, pre-specification, and risk of misclassification.

Michael Rosenblum (JHU) “Optimal, Two-Stage, Adaptive Enrichment Designs for Randomized Trials, using Sparse Linear Programming”

Adaptive enrichment designs involve preplanned rules for modifying enrollment criteria based on accruing data in a randomized trial. Such designs have been proposed, for example, when the population of interest consists of biomarker-positive and biomarker-negative individuals. The goal is to learn which populations benefit from an experimental treatment. Two critical components of adaptive enrichment designs are the decision rule for modifying enrollment and the multiple testing procedure. We provide the first general method for simultaneously optimizing both of these components for two-stage adaptive enrichment designs. We minimize expected sample size under constraints on power and the familywise type I error rate. It is computationally infeasible to solve this optimization problem directly, since it is not convex. The key to our approach is a novel representation of a discretized version of the problem as a sparse linear program. We apply advanced optimization methods to solve this problem to high accuracy, revealing new, approximately optimal designs.

Shuai Chen (University of Wisconsin) “A Flexible Framework for Treatment Scoring in Clinical Studies”

To identify subgroups of patients who have different responses to different treatments, one essentially needs to investigate interactions between the treatments and covariates. Instead of using the traditional outcome-modeling approach, we propose two alternative frameworks for treatment scoring in both observational studies and clinical trials. In particular, we construct personalized scores ranking the patients according to their potential treatment effects. In contrast to outcome modeling, under our framework there is no need to model the main effects of covariates. The proposed methods are quite flexible, and we show that several recently proposed estimators can be represented as special cases within our frameworks. As a result, some estimators that were originally proposed for randomized clinical trials can be extended to observational studies. Moreover, our approaches allow regularization in the presence of a large number of covariates. Many powerful M-estimation techniques can be used for estimation.

Parallel Session 6: Biosimilars II
Organizers: Lanju Zhang (Abbvie) and Guochen Song (Quintiles); Chair: Lanju Zhang (Abbvie)

“How to set up biosimilarity bounds in biosimilar product development”

FDA published three biosimilar guidances in 2012 and one guidance in 2014. With the approval of the first biosimilar product in March 2015, FDA cleared a regulatory pathway for biosimilar product development in the US. This calls for a stepwise development approach, including analytical biosimilarity, pharmacological biosimilarity, and clinical biosimilarity. A tiered approach has been proposed for demonstrating analytical biosimilarity, requiring more statistical rigor with increasing criticality of quality attributes. Specifically, critical quality attributes in tier 1 call for a head-to-head comparison between the reference product and the biosimilar product, using a statistical equivalence test to show biosimilarity based on appropriate equivalence bounds. The key is therefore how to set up these goalposts. In this talk, we will review some methods for setting up equivalence bounds, with a focus on 1.5 times the standard deviation of the reference product, which was used in the briefing document for the FDA's first biosimilar product approval. This is joint work with Sutan Wu from SutanStats.
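
A minimal sketch of a tier-1 style comparison follows (the lot values are hypothetical): the margin is set to 1.5 times the reference standard deviation, and the mean difference is assessed with a 90% confidence interval. Note that plugging in the estimated reference standard deviation ignores its sampling variability, one of the issues raised in the analytical-similarity abstracts above.

```python
# Sketch (hypothetical lot measurements): tier-1 style equivalence test of the
# mean difference for a quality attribute, with the equivalence margin set to
# 1.5 * (reference standard deviation) and assessed via a 90% CI.
import numpy as np
from scipy import stats

ref_lots = np.array([100.2, 99.1, 101.5, 98.7, 100.9, 99.8, 100.4, 101.1])
test_lots = np.array([100.8, 101.9, 99.5, 100.3, 101.2, 100.1])

n_r, n_t = len(ref_lots), len(test_lots)
diff = test_lots.mean() - ref_lots.mean()
margin = 1.5 * ref_lots.std(ddof=1)      # estimated sigma_R plugged in directly

sp2 = ((n_r - 1) * ref_lots.var(ddof=1) +
       (n_t - 1) * test_lots.var(ddof=1)) / (n_r + n_t - 2)
se = np.sqrt(sp2 * (1 / n_r + 1 / n_t))
ci = diff + np.array([-1.0, 1.0]) * stats.t.ppf(0.95, n_r + n_t - 2) * se

passed = (-margin < ci[0]) and (ci[1] < margin)
print(f"diff = {diff:.2f}, 90% CI = ({ci[0]:.2f}, {ci[1]:.2f}), "
      f"margin = +/-{margin:.2f}, similarity criterion met: {passed}")
```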

Russell Reeve (Quintiles) “Modeling in Biosimilars”

Models that describe the time course of trial outcomes can help with trial design and post-trial analysis. These models have been used in rheumatoid arthritis, psoriasis, oncology, and other disease areas. In this talk, we describe the models, which extend beyond placebo modeling to include dose-response modeling as well. These models allow for more precise estimation of the treatment effect, provide a scientific rationale for time point selection, and can yield better analyses of the data, particularly for the equivalence analyses required for regulatory approval. While not yet in widespread use, some promising results have recently been reported in the medical literature. The author will discuss the benefits of using time-progression models compared with the standard analyses, and will show that appropriately utilizing these models can decrease sample size and maintain type I error rates in trials with smaller equivalence bounds while increasing power, a win for consumers, sponsors, and regulators alike.

Thomas Gwise (FDA) “Points to Consider for Biosimilar Clinical Studies”

The Biologics Price Competition and Innovation Act of 2009 (BPCI Act) amends the PHS Act and other statutes to create an abbreviated licensure pathway for biological products shown to be biosimilar to an FDA-licensed biological reference product. This presentation will discuss the objectives of the BPCI Act to place into context the role of clinical studies in establishing a product as biosimilar to a reference product. The recent biosimilar application reviewed by FDA's Oncologic Drugs Advisory Committee will serve as an example to explore clinical study design issues particular to biosimilars, including margin determination.

Sujit Ghosh (NC State University and SAMSI) “Dynamic Model Based Methods to Test for Biosimilarity”

In recent years there has been a lot of interest in testing for similarity between biological drug products, commonly known as biologics. Biologics are large, complex-molecule drugs that are produced by living cells and hence are sensitive to environmental changes. In addition, biologics usually induce antibodies, which raises safety and efficacy issues. The manufacturing process is also much more complicated and costly than for small-molecule generic drugs. Because of these complexities and the inherent variability of biologics, the testing paradigm for traditional generic drugs cannot be used directly to test for biosimilarity. Taking some of these concerns into account, we propose a dynamic model-based methodology that takes into consideration the entire time course of the study, based on a class of flexible models. The empirical results show that the proposed approach is more sensitive than the classical equivalence test approach and requires a much smaller sample size for detecting biosimilarity. [This is joint work with Dr. Yifang Li, Novartis Inc.]

Parallel Session 7: Advanced Survival Analysis
Organizers: Marlina Nasution (Parexel) and Changbin Guo (SAS); Chair: Marlina Nasution (Parexel)

Peter Jakobs (Parexel) “Analysis of Recurrent Adverse Events of Special Interest: an Application for Hazard-Based Models”

For decades, safety risks at the study or product level have been summarized by incidence estimates: typically, with $N_j$ denoting the number of subjects who received at least one dose of study treatment $j$ and $n_{j,x}$ denoting the number of subjects in treatment group $j$ who experienced at least once an adverse event $x$ (e.g., categorized as a MedDRA Preferred Term), such an incidence is estimated by $n_{j,x}/N_j \times 100\%$. Treatment groups have been compared by related estimates such as the risk difference, risk ratio, and odds ratio. Timing, duration, and recurrence of adverse events have frequently been ignored. A trend toward utilizing time-to-first-event methodology (such as cumulative incidence estimates and Cox proportional hazards regression models) in safety assessments has been observed over recent years, but this approach is still limited. My presentation will outline some statistical methodology for evaluating risks for recurrent (or otherwise complex) safety events of special interest, focusing on hazard-based models for counting processes and multi-state models. For example, states in a multi-state model for adverse events of special interest may be defined by administration of certain concomitant medication(s) over the course of the study (medications that either change the risk for such adverse events or are used to treat them). If time allows, a fictitious case study analysis will be presented as well.

Audrey Boruvka (University of Michigan) “Understanding the effect of treatment on progression-free survival and overall survival”

Cancer clinical trials are routinely designed on the basis of event-free survival time, where the event of interest may represent a complication, metastasis, relapse, or progression. This talk is concerned with a number of statistical issues arising with the use of such endpoints, including interpretation and dual censoring schemes. We consider methods to evaluate this endpoint based on the Cox model. However, even when treatment is randomized, the resulting hazard ratios have limited interpretation as causal effects. We point to some ways in which one can draw causal inferences in this particular setting. This talk is based on joint work with Richard J. Cook and Leilei Zeng at the University of Waterloo.

Changbin Guo (SAS) “Current Methods in Survival Analysis Using SAS/STAT® Software”

Interval censoring occurs in clinical trials and medical studies when patients are assessed only periodically. As a result, an event is known to have occurred only between two assessment times. Traditional survival analysis methods for right-censored data are not applicable, so specialized methods are needed for interval-censored data. The goal of this presentation is to give an overview of these techniques and their recent implementation in SAS software, both for estimation and comparison of survival functions and for proportional hazards regression. Competing risks arise in studies when individuals are subject to a number of potential failure events and the occurrence of one event may impede the occurrence of other events. A useful quantity in competing-risks analysis is the cumulative incidence function, which is the probability sub-distribution function of failure from a specific cause. This presentation describes how to use the LIFETEST procedure to compute the nonparametric estimate of the cumulative incidence function and test for group differences. In addition, this presentation will describe two approaches available with the PHREG procedure for evaluating the relationship of covariates to cause-specific failure. The first approach models the cause-specific hazard, and the second approach models the cumulative incidence (Fine and Gray 1999).

Parallel Session 8: Enrichment Design for Clinical Trials
Organizer: Jane Qian (Abbvie); Chair: Bo Yang (Abbvie)

“Enrichment Design with Patient Population Augmentation”

Clinical trials can be enriched in subpopulations that may be more responsive to treatment to improve the chance of trial success. In 2012 FDA issued a draft guidance to facilitate enrichment designs, in which it pointed out the uncertainty in the subpopulation classification and in the treatment effect outside of the identified subpopulation. We consider a novel design strategy in which the identified subpopulation (biomarker-positive) is augmented by some biomarker-negative patients. Specifically, after sufficiently powering the biomarker-positive subpopulation, we propose to enroll enough biomarker-negative patients to assess the overall treatment benefit. We derive a weighted statistic for this assessment, correcting for the disproportionality of the biomarker-positive and biomarker-negative subpopulations under the enriched trial setting; screening information is utilized for weight determination. This statistic is an unbiased estimate of the overall treatment effect, as in all-comer trials, and is the basis for powering the overall treatment effect assessment. For analysis, testing will first be performed on the biomarker-positive subpopulation; only if treatment benefit is established in this subpopulation will the overall treatment effect be tested using the weighted statistic. [Joint with Yijie Zhou from AbbVie]
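
One plausible form of such a weighted statistic, shown here only as an illustration consistent with the description above (not necessarily the authors' exact formulation), weights the subgroup-specific effect estimates by the biomarker-positive prevalence $w$ estimated from screening:

$$\hat{\Delta}_{\text{overall}} = w\,\hat{\Delta}_{+} + (1 - w)\,\hat{\Delta}_{-}, \qquad \widehat{\operatorname{Var}}\big(\hat{\Delta}_{\text{overall}}\big) = w^{2}\,\widehat{\operatorname{Var}}\big(\hat{\Delta}_{+}\big) + (1 - w)^{2}\,\widehat{\operatorname{Var}}\big(\hat{\Delta}_{-}\big),$$

where $\hat{\Delta}_{+}$ and $\hat{\Delta}_{-}$ are the treatment-effect estimates in the biomarker-positive and biomarker-negative subgroups. Weighting by the screening prevalence rather than by the enrolled proportions is what corrects for the deliberate over-enrollment of biomarker-positive patients.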

Shu-Chih Su (Merck) “A Population-Enrichment Adaptive Design Strategy for Vaccine Efficacy Trial”

Adaptive design has the flexibility to allow pre-specified modifications to an ongoing trial to mitigate the potential risk associated with assumptions made at the design stage. It allows studies to include a broader target patient population and to evaluate the performance of a vaccine/drug across subpopulations simultaneously. Our work is motivated by a Phase III event-driven vaccine efficacy trial. Two target patient populations are being enrolled under the assumption that vaccine efficacy can be demonstrated based on the two patient subpopulations combined. It is recognized that, due to the heterogeneity of patient characteristics, the two subpopulations might respond to the vaccine differently; i.e., the vaccine efficacy (VE) in one population could be lower than in the other. To maximize the probability of demonstrating vaccine efficacy in at least one patient population while taking advantage of combining two populations in a single trial, an adaptive design strategy with potential population enrichment is developed. Specifically, if the observed vaccine efficacy at interim for one subpopulation is not promising enough to warrant carrying forward, enrollment in the other population can be enriched. Simulations were conducted to evaluate the operating characteristics of different timings and futility boundaries for the interim analysis. This population-enrichment design provides a more efficient approach compared to conventional designs with several target subpopulations. If planned and executed with caution, it can improve the probability of having a successful trial. [Joint with Ivan S.F. Chan from Merck]

Hui Quan (Sanofi) “Adaptive Patient Population Selection Design in Clinical Trials”

For the success of a new drug development program, it is crucial to select sensitive patient populations. To potentially reduce timelines and cost, we may apply a two-stage adaptive patient population selection design to a therapeutic trial. In such a design, based on early results of the trial, patient population(s) will be selected/determined for the final stage and analysis. Because of this adaptive nature and the multiple between-treatment comparisons for multiple populations, an alpha adjustment is necessary. In this paper, we propose a closed step-down testing procedure to assess treatment effects on multiple populations and a weighted combination test to combine data from the two stages after sample size adaptation. Computation/simulation is used to compare the performance of the proposed procedure with other multiplicity adjustment procedures. A trial simulation is presented to illustrate the application of the methods. [Joint with Dongli Zhou, Pierre Mancini, Yi He and Gary Koch from Sanofi]

Sue-Jane Wang (FDA) “FDA draft guidance on enrichment strategy and FDA guidance on biomarkers for enrichment as drug development tools”

Enrichment is a strategy frequently used in designing clinical trials. There are many ways one can design a trial using an enrichment strategy. The FDA draft guidance on enrichment strategies for therapeutic intervention trials distinguishes enrichment design types, gives their rationales, and shares enrichment examples across multiple therapeutic areas for drug development. In contrast, the FDA guidance on biomarkers as drug development tools focuses on an enrichment strategy based on biomarker characteristics that is applicable to multiple drug development programs. This talk will share FDA's current thinking on these two documents.

Parallel Session 9: Dose Finding and Selection in Clinical Phase
Organizers: Qiqi Deng (Boehringer-Ingelheim) and Joshua Betcher (Quintiles); Chair: Susan Wang (Boehringer-Ingelheim)

Rebhi Bsharat (Quintiles) “Using utility index to evaluate risk-benefit of several doses to help in dose selection”

In early-phase studies where the sample size is small and several endpoints are used to choose candidate doses for later stages of drug development, the challenge is choosing the doses that have maximum efficacy and the best safety profile. Treatment arms, including active control and/or placebo, can give conflicting messages when evaluated on different endpoints. A technique is presented to summarize the overall utility of each dose and to compare different doses to placebo or active control using a clinical utility index, a multivariate utility function that summarizes the utility for each subject across all endpoints. Active doses are compared to placebo or active control using bootstrap confidence intervals. The technique supports informed decision-making based on evaluation of different scenarios using simulation.

Qiqi Deng (Boehringer-Ingelheim) “A robust method using ordinal linear contrast test to design dose ranging study”

Nowadays, many sponsors are working to speed up the clinical development process. A commonly used strategy is to combine the proof-of-concept (PoC) and dose-ranging clinical studies into a single trial in early Phase II development. In such trials, the primary objective is to establish PoC and make a go/no-go decision, and an important secondary objective is to identify a range of doses to move into Phase III. We propose to use an ordinal linear contrast test (also referred to as a trend test) to design such a trial; it is easy to communicate to non-statisticians, simple to implement, and provides robust performance for different dose-response curves under a monotonicity assumption. We will also discuss the implications of different ways of allocating patients to each treatment group under a given total sample size, which is often limited by budget and ethical concerns.
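
A minimal sketch of such a trend test follows (the data, doses, and contrast coefficients are hypothetical): a zero-sum linear contrast is applied to the ordered dose-group means and referred to a t distribution with the pooled-variance degrees of freedom.

```python
# Sketch (hypothetical data): ordinal linear contrast (trend) test across
# ordered dose groups, using zero-sum, equally spaced contrast coefficients.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_per_arm = 30
true_means = [0.0, 0.2, 0.45, 0.6]          # assumed monotone dose response
groups = [rng.normal(m, 1.0, n_per_arm) for m in true_means]

c = np.array([-3.0, -1.0, 1.0, 3.0])        # zero-sum, equally spaced contrast
means = np.array([g.mean() for g in groups])
ns = np.array([len(g) for g in groups])
df = int(ns.sum()) - len(groups)
sp2 = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / df   # pooled variance

t_stat = (c @ means) / np.sqrt(sp2 * np.sum(c ** 2 / ns))
p_one_sided = 1 - stats.t.cdf(t_stat, df)
print(f"trend t = {t_stat:.2f}, one-sided p = {p_one_sided:.4f}")
```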

Yaning Wang (FDA) “Regulatory Application of Exposure-Response Analyses in Dose Selection”

Exposure-response analyses are routinely conducted by pharmacometric reviewers at FDA to address the key question: is the selected dose/dosing regimen consistent with the exposure-response relationships for both efficacy and safety? Such analyses are used to support the approved dose/dosing regimen and to justify additional studies, such as post-marketing requirement (PMR) or post-marketing commitment (PMC) studies, to further optimize the dose/dosing regimen. Case studies will be shared to demonstrate the application of exposure-response analyses in the regulatory decision-making process.

Li Wang (Abbvie) “Enhanced understanding of MCPMod in Dose-ranging Studies”

MCPMod (Bretz et al., 2005; Pinheiro et al., 2014) is an approach that provides additional insight for selecting the “best” underlying dose-response model and controls the FWER at the PoC stage of model selection. The target dose is selected from the final model and is not restricted to the candidate set of doses evaluated in the trial. This enables a more informative Phase II trial design and provides a more solid basis for all subsequent dose selection strategies and decisions. The MCPMod approach has received supportive regulatory opinion, e.g., the EMA CHMP qualification opinion of 10/01/2013. In this research, we further evaluated the performance of MCPMod for MED estimation using weighted and unweighted AIC criteria, and the impact of the number of doses and different prior assumptions on model selection and restricted MED estimation.

Poster Session Abstract

Who will benefit from antidepressants in the acute treatment of bipolar depression? A follow-up observational data analysis of STEP-BD

Publication date: 2015